Lifted Optimization for Relational Preference Rules
Abstract
The move from propositional to relational probabilistic models stemmed from the realization that in many domains the knowledge we have, and need, is at a level more generic than that of concrete objects. In reasoning about preference, too, we often have knowledge about the desirable behavior or state of a system of agents/objects that applies to different instantiations of this system, where instantiations may differ in the number and properties of concrete objects. As an example, imagine the problem of monitoring emergency services in a large city. Our objects are fire-fighters, fire-engines, fire-events, injured civilians, and various devices that help us monitor the state of this system, such as cameras and other mounted and stationary sensors. These instruments transmit huge amounts of data to a control center, and our system must decide which information (e.g., which video streams) to display to a decision-maker monitoring this system at each point in time. As the set of objects and their properties change over time (e.g., as new fire events occur), the logic behind what is more and what is less desirable can (and must) be described at a generic level, so that it applies to different concrete instances of such a monitoring system. In Brafman [2008] we proposed relational preference rules (RPRs) as a formalism for modeling preference/value information about such a system. RPRs are similar in form to existing PRMs, such as Bayesian Logic Programs [Kersting and De Raedt, 2007], Relational Bayesian Networks [Jaeger, 1997], and Markov Logic [Richardson and Domingos, 2006], and can be viewed as relational UCP networks [Boutilier et al., 2001], which induce a GAI value function over any given set of objects. We illustrate the basics of this formalism here, and refer the reader to Brafman [2008] for more details.
In this paper we concentrate on lifted inference for relational preference rules, which turns out to be quite different from lifted probabilistic inference.
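To make the idea concrete, the following is a minimal sketch (in invented toy syntax, not the paper's actual RPR formalism) of how a generic preference rule, grounded over a concrete set of objects, induces a generalized additive (GAI) value function: each grounding of the rule contributes a local value term, and the value of a state is the sum of these terms. All object names and rule values here are hypothetical.

```python
# Toy illustration of a relational preference rule inducing a GAI
# value function. The rule is a generic template over object types
# (cameras, fire events); grounding it over a concrete instance
# yields one local value term per (camera, fire) pair.

from itertools import product

# Hypothetical concrete instance: two cameras, one active fire.
cameras = ["cam1", "cam2"]
fires = ["fire1"]

# State: which camera streams are currently displayed, and which
# fires each camera covers.
displayed = {"cam1"}
covers = {("cam1", "fire1"), ("cam2", "fire1")}

def rule_value(cam, fire):
    """Generic rule: displaying a stream that covers an active fire
    is worth 10; displaying one that does not is worth -1; streams
    that are not displayed contribute nothing."""
    if cam in displayed:
        return 10 if (cam, fire) in covers else -1
    return 0

# Ground the rule over all (camera, fire) pairs and sum the local
# values -- the GAI decomposition of the instance's value function.
total = sum(rule_value(c, f) for c, f in product(cameras, fires))
print(total)  # cam1 is displayed and covers fire1 -> 10
```

The same template applies unchanged if the instance grows to ten cameras and three fires; only the grounding changes, which is exactly the symmetry that lifted optimization exploits.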